Visual tasks vary widely in output format and content, making it difficult to process them with a uniform structure. One major obstacle lies in the high-dimensional outputs of object-level visual tasks. In this paper, we propose an object-centric vision framework, Obj2Seq. Obj2Seq takes objects as basic units and regards most object-level visual tasks as sequence-generation problems over objects. These tasks can accordingly be decomposed into two steps: first recognize objects of given categories, then generate a sequence for each object. The definition of the output sequence differs across tasks, and the model is supervised by matching these sequences with ground-truth targets. Obj2Seq is able to flexibly determine input categories to satisfy customized requirements, and can be easily extended to different visual tasks. When experimenting on MS COCO, Obj2Seq achieves 45.7% AP on object detection, 89.0% AP on multi-label classification, and 65.0% AP on human pose estimation. These results demonstrate its potential to be generally applied to different visual tasks. Code is available at https://github.com/casia-iva-lab/obj2seq.
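To make the object-as-sequence interface concrete, here is a minimal sketch of how two tasks might flatten their per-object outputs into sequences under such a framework; the layouts (a 4-value box; 17 COCO keypoints) are hypothetical, not the released Obj2Seq code.

```python
# Hypothetical per-task sequence layouts for an Obj2Seq-style decoder:
# each task only has to define how an object's outputs are flattened
# into one sequence, so the same generation head can serve all tasks.
from typing import List

def detection_sequence(box: List[float]) -> List[float]:
    """Object detection: the sequence is just (cx, cy, w, h)."""
    assert len(box) == 4
    return list(box)

def pose_sequence(box: List[float], keypoints: List[float]) -> List[float]:
    """Pose estimation: the box followed by 17 (x, y) keypoint offsets."""
    assert len(box) == 4 and len(keypoints) == 17 * 2
    return list(box) + list(keypoints)
```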
From the frequency perspective of computer vision, previous unsupervised domain adaptation methods fail to handle cross-domain problems. Images or feature maps of different domains can be decomposed into low-frequency and high-frequency components. This paper proposes the hypothesis that low-frequency information is more domain-invariant, while high-frequency information contains domain-related information. Hence, we introduce a method named the Low-Frequency Module (LFM) to extract domain-invariant feature representations. The LFM is constructed with a digital Gaussian low-pass filter. Our method is easy to implement and introduces no extra hyperparameters. We design two effective ways of using the LFM for domain adaptation; our method is complementary to other existing methods and serves as a plug-and-play unit that can be combined with them. Experimental results demonstrate that our LFM outperforms state-of-the-art methods on various computer vision tasks, including image classification and object detection.
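As a rough illustration of the idea, the following sketch applies a Gaussian low-pass transfer function to feature maps in the Fourier domain; the FFT-based implementation and the `sigma` cutoff are illustrative assumptions, not the authors' released LFM code.

```python
# Sketch of a Low-Frequency Module: keep only the low-frequency
# (more domain-invariant) component of each feature map via a
# Gaussian low-pass filter applied in the Fourier domain.
import torch

def gaussian_low_pass(x: torch.Tensor, sigma: float = 0.15) -> torch.Tensor:
    """x: feature maps of shape (N, C, H, W)."""
    n, c, h, w = x.shape
    # Frequency grid, shifted so that DC sits at the centre.
    fy = torch.fft.fftshift(torch.fft.fftfreq(h, device=x.device))
    fx = torch.fft.fftshift(torch.fft.fftfreq(w, device=x.device))
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    # Gaussian transfer function: ~1 near DC, decaying with frequency.
    mask = torch.exp(-(radius ** 2) / (2 * sigma ** 2))
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    filtered = spec * mask  # suppress high-frequency, domain-specific content
    out = torch.fft.ifft2(torch.fft.ifftshift(filtered, dim=(-2, -1)))
    return out.real
```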
Artistic text recognition is an extremely challenging task with a wide range of applications. However, current scene-text recognition methods mainly focus on irregular text and have not specifically explored artistic text. The challenges of artistic text recognition include the varied appearance of specially designed fonts and effects, the complex connections and overlaps between characters, and severe interference from background patterns. To alleviate these problems, we propose to recognize artistic text at three levels. First, corner points are used to guide the extraction of features inside characters, since corner structures are robust to changes in appearance and shape. In this way, the discreteness of corner points cuts off the connections between characters, and their sparsity improves robustness against background interference. Second, we design a character contrastive loss to model character-level features, improving the feature representation for character classification. Third, we use Transformers to learn global features at the image level and to model the global relationships among corner points with the aid of a corner cross-attention mechanism. In addition, we provide an artistic text dataset to benchmark performance. Experimental results verify the significant superiority of our proposed method on artistic text recognition, and it also achieves state-of-the-art performance on several blurred and perspective datasets.
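A character-level contrastive loss of the kind described above can be sketched as a supervised InfoNCE objective over character features; the temperature and this particular formulation are standard assumptions, not necessarily the paper's exact loss.

```python
# Sketch of a character contrastive loss: features of the same character
# class are pulled together, all other pairs are pushed apart.
import torch
import torch.nn.functional as F

def char_contrastive_loss(feats: torch.Tensor, labels: torch.Tensor,
                          tau: float = 0.1) -> torch.Tensor:
    """feats: (N, D) character features; labels: (N,) character class ids."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ feats.t() / tau                    # pairwise similarities
    self_mask = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # drop self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos = labels[:, None].eq(labels[None, :]) & ~self_mask
    # Average log-probability of positive pairs, per anchor with positives.
    pos_counts = pos.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos, 0).sum(1) / pos_counts)
    return loss[pos.sum(1) > 0].mean()
```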
Fonts are ubiquitous across documents and come in a variety of styles. They are represented either in a native vector format or rasterized into fixed-resolution images. In the first case, the non-standard representation prevents benefiting from the latest network architectures for neural representations; in the latter case, the rasterized representation incurs a loss of data fidelity when encoded by networks, as font-specific discontinuities like edges and corners are difficult to represent with neural networks. Based on the observation that complex fonts can be represented by a superposition of a set of simpler occupancy functions, we introduce \textit{multi-implicits} to represent fonts as a permutation-invariant set of learned implicit functions, without losing features (e.g., edges and corners). However, while multi-implicits locally preserve font features, obtaining supervision in the form of ground-truth multi-channel signals is a problem in itself. Instead, we propose how to train such a representation with only local supervision, while the proposed neural architecture directly finds globally consistent multi-implicits for font families. We extensively evaluate the proposed representation on a range of tasks, including reconstruction, interpolation, and synthesis, to demonstrate clear advantages over existing alternatives. Additionally, the representation naturally enables glyph completion, wherein a single characteristic font is used to synthesize a whole font family in the target style.
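The superposition idea can be sketched as taking, at each query point, the maximum over a small set of learned implicit functions, which makes the representation permutation-invariant over the set; the tiny MLPs and the hard max below are illustrative assumptions.

```python
# Sketch of a multi-implicit glyph: occupancy is the per-point maximum
# over a set of simpler learned implicit functions.
import torch
import torch.nn as nn

class MultiImplicit(nn.Module):
    def __init__(self, num_parts: int = 4, hidden: int = 64):
        super().__init__()
        self.parts = nn.ModuleList(
            nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(num_parts))

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        """xy: (N, 2) query points; returns (N,) occupancy logits."""
        vals = torch.stack([p(xy).squeeze(-1) for p in self.parts], dim=-1)
        # Max over the set: the order of the part functions does not matter.
        return vals.max(dim=-1).values
```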
Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, this process is, however, resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, which replaces the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, Fourier-Net learns its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) into the dense, full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22$\%$ of its parameters and 6.66$\%$ of its mult-adds, achieves a 0.6$\%$ higher Dice score and an 11.48$\times$ faster inference speed. Code is available at \url{https://github.com/xi-jia/Fourier-Net}.
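The described decoder is simple enough to sketch directly: zero-pad the band-limited coefficients to the full resolution and apply an inverse DFT; the tensor shapes and centring convention below are assumptions for illustration, not the released Fourier-Net code.

```python
# Sketch of a parameter-free Fourier decoder: zero-padding followed by
# an inverse DFT recovers a dense full-resolution displacement field
# from a band-limited coefficient grid.
import torch

def fourier_decode(low_freq: torch.Tensor, full_size: tuple) -> torch.Tensor:
    """low_freq: (N, 3, h, w, d) complex coefficients of a 3D displacement
    field; full_size: target spatial size (H, W, D) with H >= h, etc."""
    n, ch, h, w, d = low_freq.shape
    H, W, D = full_size
    padded = torch.zeros((n, ch, H, W, D), dtype=low_freq.dtype,
                         device=low_freq.device)
    # Place the band-limited block at the centre of the (shifted) spectrum.
    padded[..., (H - h)//2:(H + h)//2, (W - w)//2:(W + w)//2,
           (D - d)//2:(D + d)//2] = low_freq
    spec = torch.fft.ifftshift(padded, dim=(-3, -2, -1))
    field = torch.fft.ifftn(spec, dim=(-3, -2, -1))
    return field.real  # dense, full-resolution displacement field
```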
Thanks to their extreme long-range modeling capability, vision-transformer-based networks have become increasingly popular in deformable image registration. We argue, however, that the receptive field of a 5-layer convolutional U-Net is sufficient to capture accurate deformations without needing long-range dependencies. The purpose of this study is therefore to investigate whether U-Net-based methods are outdated compared to modern transformer-based approaches when applied to medical image registration. To this end, we propose a large-kernel U-Net (LKU-Net) by embedding a parallel convolutional block into a vanilla U-Net to enlarge the effective receptive field. On the public 3D IXI brain dataset for atlas-based registration, we show that the performance of the vanilla U-Net is already comparable with that of state-of-the-art transformer-based networks (such as TransMorph), and that the proposed LKU-Net outperforms TransMorph while using only 1.12% of its parameters and 10.8% of its mult-add operations. We further evaluate LKU-Net on the MICCAI Learn2Reg 2021 challenge dataset for inter-subject registration, where our LKU-Net also outperforms TransMorph and ranks first on the public leaderboard as of the submission of this work. With only modest modifications to the vanilla U-Net, we show that U-Net can outperform transformer-based architectures on both inter-subject and atlas-based 3D medical image registration. Code is available at https://github.com/xi-jia/lku-net.
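A parallel large-kernel block of the kind described above might look like the following sketch, where identity, small-kernel, and large-kernel paths are summed to enlarge the effective receptive field; the kernel size and activation are illustrative choices, not the exact LKU-Net block.

```python
# Sketch of a parallel large-kernel convolutional block for a 3D U-Net.
import torch
import torch.nn as nn

class ParallelLargeKernelBlock(nn.Module):
    def __init__(self, channels: int, large_kernel: int = 5):
        super().__init__()
        pad = large_kernel // 2
        self.large = nn.Conv3d(channels, channels, large_kernel, padding=pad)
        self.small = nn.Conv3d(channels, channels, 3, padding=1)
        self.act = nn.PReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity, small-kernel, and large-kernel paths summed in parallel.
        return self.act(x + self.small(x) + self.large(x))
```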
Generative models have been widely proposed in image recognition to generate more images whose distribution is similar to that of real images. They usually introduce a discriminative network to distinguish real data from generated data; such a network is responsible for distinguishing style-transferred data from data contained in the target dataset. However, such a network focuses on differences in intensity distribution and may ignore structural differences between the datasets. In this paper, we formulate a new image-to-image translation problem that ensures the structure of the generated images is similar to that of the images in the target dataset. We propose a simple yet powerful Structure-Unbiased Adversarial (SUA) network that accounts for both intensity and structural differences between the training and test sets when performing image segmentation. It consists of a spatial transform block followed by an intensity-distribution rendering module. The spatial transform block is proposed to reduce the structural gap between two images, and also produces an inverse deformation field to warp the final segmented image back. The intensity-distribution rendering module then renders the deformed structure into an image with the target intensity distribution. Experimental results show that the proposed SUA method can transfer both the intensity distribution and the structural content between multiple datasets.
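The spatial transform block relies on warping images with a dense deformation field (and warping the segmentation back with the inverse field). A minimal warping routine, following `torch.nn.functional.grid_sample` conventions rather than the authors' exact implementation, might look like this:

```python
# Sketch of dense-field warping: the same routine applied with the
# inverse field warps the segmentation map back to the original space.
import torch
import torch.nn.functional as F

def warp(image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """image: (N, C, H, W); flow: (N, 2, H, W) displacements in pixels."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=image.device),
                            torch.arange(w, device=image.device),
                            indexing='ij')
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1   # normalise to [-1, 1]
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)    # (N, H, W, 2), x first
    return F.grid_sample(image, grid, align_corners=True)
```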
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point-cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
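One way to realize the implicit alignment described above is to map sampled 3D points into the token feature space with a small network and add them to the modality tokens, so that image and LiDAR tokens share a common positional reference; the MLP encoder below is an illustrative assumption, not CMT's released implementation.

```python
# Sketch of implicit spatial alignment via 3D position encoding.
import torch
import torch.nn as nn

class Point3DPositionEncoder(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, tokens: torch.Tensor, points: torch.Tensor):
        """tokens: (N, L, dim) modality tokens; points: (N, L, 3) 3D coords
        associated with each token."""
        return tokens + self.mlp(points)
```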
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects, i.e., the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features, and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots; e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
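The mask-based dynamic weighting idea can be sketched as masked average pooling of support features into a class center that then gates the query features channel-wise; the pooling and sigmoid gating below are illustrative assumptions, not the exact RefT module.

```python
# Sketch of mask-based dynamic class centers re-weighting query features.
import torch

def dynamic_reweight(support_feat: torch.Tensor, support_mask: torch.Tensor,
                     query_feat: torch.Tensor) -> torch.Tensor:
    """support_feat: (C, H, W); support_mask: (H, W) binary mask;
    query_feat: (C, H', W')."""
    # Masked average pooling -> one C-dimensional class center.
    center = (support_feat * support_mask).sum(dim=(1, 2)) \
             / support_mask.sum().clamp(min=1)
    weights = torch.sigmoid(center)[:, None, None]   # channel-wise gates
    return query_feat * weights
```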